perm filename NSF[E80,JMC]2 blob
sn#537923 filedate 1980-09-30 generic text, type C, neo UTF8
COMMENT ⊗ VALID 00010 PAGES
C REC PAGE DESCRIPTION
C00001 00001
C00002 00002 .device XGP
C00005 00003 .onecol BEGIN "TITLE PAGE"
C00006 00004 .BEGIN "COVER PAGE"
C00009 00005 .EVERY HEADING(%3{SECNAME},{SSNAME},{PAGE!}) page←0 twocol cb Abstract
C00019 00006 .cb Recent Advances and Changes in Direction of the Research
C00029 00007 .cb Proof-Checking by Computer
C00050 00008 .cb Personnel
C00060 00009 .onecol cb Budget
C00070 00010 .twocol bib
C00075 ENDMK
C⊗;
.device XGP;
.turn on "α%";
.font 1 "baxl30"; font 2 "baxm30"; font 3 "baxb30"; font 4 "baxs30";
. << output a ligature & restore fonts >>
.recursive macro lig(A); ⊂start f_f←thisfont; ("%*%4αA%*%"&f_f) end⊃;
.
.AT "ff" ⊂ IF THISFONT=1 THEN lig(@) ELSE if thisfont=2 then lig(P)
. else if thisfont=3 then lig("`") else "fαf" ⊃;
.AT "ffi" ⊂ IF THISFONT=1 THEN lig(A) ELSE if thisfont=2 then lig(Q)
. else if thisfont=3 then lig(a) else "fαfαi" ⊃;
.AT "ffl" ⊂ IF THISFONT=1 THEN lig(B) ELSE if thisfont=2 then lig(R)
. else if thisfont=3 then lig("b") else "fαfαl" ⊃;
.AT "fi" ⊂ IF THISFONT=1 THEN lig(C) ELSE if thisfont=2 then lig(S)
. else if thisfont=3 then lig(c) else "fαi" ⊃;
.AT "fl" ⊂ IF THISFONT=1 THEN lig(D) ELSE if thisfont=2 then lig(T)
. else if thisfont=3 then lig(d) else "fαl" ⊃;
.AT "--" ⊂ IF THISFONT=1 THEN lig(E) ELSE if thisfont=2 then lig(U)
. else if thisfont=3 then lig(e) else "α-α-" ⊃;
.font 5 "gacs25";
.font 6 "clar30"
.font 9 "sail25"
.sides←1;
.require "twocol.pub[sub,sys]" source_file;
.MACRO YON(LBL) ⊂ "Section "; SUB2! LBL ⊃;
.MACRO BC ⊂ BEGIN PREFACE 0; INDENT 1,4; CRBREAK nojust ⊃
.MACRO BS ⊂ BEGIN PREFACE 0; INDENT 1,4; nojust ⊃
.MACRO SUB(IND) ⊂ INDENT 0,IND; TABS IND+1;⊃
.MACRO IB ⊂ turn on "%";
.AT """" ⊂ (IF THISFONT=1 THEN "%3" ELSE "%1"); ⊃
.AT "<" ⊂ "%2" ⊃; AT ">" ⊂ "%1" ⊃;
. ⊃
.MACRO BIB ⊂ CB(References);
. BEGIN INDENT 0,3; NOJUST; IB;
. AT "AIM-" ⊂ "Stanford AI Memo AαIαMα-" ⊃;
. COUNT exref TO 200
. AT "⊗" ⊂ IF LINES<3 THEN NEXT COLUMN; NEXT EXREF; ("["&EXREF&"] ") ⊃
. ⊃
.
.COUNT ITEM
.AT "#" ⊂NEXT ITEM;(ITEM!);⊃;
.SECNAME←"";
.portion some; place text;
.every heading();
.onecol; BEGIN "TITLE PAGE"
.SKIP TO COLUMN 1;
.CENTER;
.PREFACE 0;
.SELECT 1;
Research Proposal Submitted to
.SKIP 2;
%3THE NATIONAL SCIENCE FOUNDATION
.SKIP 2;
%1for
.SKIP 2;
%3Basic Research in Artificial Intelligence
.SKIP 2;
%1by
.SKIP 2;
John McCarthy
Professor of Computer Science
Principal Investigator
.SKIP 8;
September 1980
.SKIP 5;
Computer Science Department
.SKIP 1;
%3STANFORD UNIVERSITY
.SKIP 1;
%1Stanford, California
.END "TITLE PAGE";
.BEGIN "COVER PAGE"
.SKIP TO COLUMN 1;
.NOFILL;
.PREFACE 0;
.INDENT 0,0,0;
.SELECT 1;
.TURN ON "\↓_";
.AT "≤≤" TXT "≥" ⊂ }↓_%9TXT%*_↓{ ⊃;
. BEGIN "COVER PAGE TITLE"
. CENTER;
. SELECT 6;
Research Proposal Submitted to the National Science Foundation
. END "COVER PAGE TITLE";
.SKIP 5;
.SELECT 1;
Proposed Amount ≤≤$423,225≥. Proposed Effective Date ≤≤1 July 1981≥. Proposed Duration ≤≤3 years≥.
.SKIP 3;
Title ≤≤Basic Research in Artificial Intelligence≥
.TABS 40;
.SKIP 3;
Principal Investigator ≤≤John McCarthy≥\Submitting Institution ≤≤Stanford University≥
Soc. Sec. No. ≤≤558-30-4793≥\Department ≤≤ Computer Science ≥
\Address ≤≤Stanford, California 94305≥
.SKIP 3;
Make grant to ≤≤Board of Trustees of the Leland Stanford Junior University≥
.SKIP 3;
Endorsements:
.TABS 10,35,59;
.SKIP 1;
\Principal Investigator\Department Head\Institutional Admin. Official
.TABS 10,35,60;
.PREFACE 1;
Name\≤≤John McCarthy ≥\≤≤Edward A. Feigenbaum ≥\≤≤ ≥
.SKIP 1;
Signature\≤≤ ≥\≤≤ ≥\≤≤ ≥
Title\≤≤Professor ≥\≤≤Professor & Chairman ≥\≤≤ ≥
Telephone\≤≤(415) 497-4430 ≥\≤≤(415) 497-4079 ≥\≤≤ ≥
Date\≤≤ ≥\≤≤ ≥\≤≤ ≥
.END "COVER PAGE";
.EVERY HEADING(%3{SECNAME},{SSNAME},{PAGE!}); page←0; twocol; cb Abstract
.fill adjust
This is a request for a grant
for continued support of basic research in artificial intelligence
with emphasis on the structure of formal reasoning, epistemological
problems of artificial intelligence and mathematical theory of computation.
The mathematical theory of computation supports the AI work by
providing tools for reasoning about complex strategies and
showing that they attain their goals.
The line of research being pursued was already outlined in our 1977
proposal, and this proposal incorporates much material from that one.
.cb Epistemological Problems of Artificial Intelligence
Artificial intelligence has proved to be a difficult branch
of science. Some people thought that human-level intelligence
could be achieved in ten or twenty years, but those estimates were based
only on the difficulties that were visible when the optimistic
predictions were made. Our opinion is that major scientific discoveries
are required before human-level general intelligence can be reached.
Moreover, many of these discoveries require theoretical advances and
not merely the extension of current ideas for producing "expert programs"
to new domains.
The recent emphasis by ARPA and other agencies sponsoring AI
research on immediate applications has resulted in a serious
imbalance. Deciding what the basic issues are is difficult
enough without having to formulate everything in terms of demonstration
programs to be available in two years. While there has been some
recent increase in understanding of the need for basic research,
the main interest is still on short term goals.
Our research is based on the idea that, for many purposes,
the problems of artificial intelligence can be separated into two
parts - an epistemological part and a heuristic part.
The %2epistemological%1 part concerns what facts and inference
rules are available for solving a problem and how they can be
represented in the memory of a computer, and the heuristic part
concerns procedures for deciding what to do on the basis of the
necessary facts. Most work in AI has concerned %2heuristics%1,
and computer representations of information have been chosen that
are capable of representing only a part of the information that
would be available to a human. The modes of reasoning used by
present programs are often even weaker than the representations
themselves.
The lines of research we plan to pursue are exemplified in
the attached papers (McCarthy 1979a,b,1980, Moore 1977).
Here we shall explain how this work fits together. Our long range goal is
a program that can be told facts about the world and can use them
effectively to achieve the goals it is given.
Sometimes it will use the facts directly from its data base using
deductive and inductive processes like the deductive processes
of mathematical logic. However, we are already sure (McCarthy 1980)
that conjectural processes will be needed that go beyond deduction
as presently conceived. Sometimes it will use these facts to compile
"expert programs" that apply them more efficiently than
simple reasoning. However, the expert programs will have to defer
to reasoning when unexpected use of the factual data-base is required.
We do not propose to implement such a program immediately -
maybe not at all within the next five years. This is because its
success depends on successful formalization of facts about the world.
We have made progress in this formalization, but we may be occupied
with it exclusively for the foreseeable future. In short, we will
emphasize theoretical artificial intelligence except for the work
with proof-checkers to be described below.
The main areas of our previous accomplishment and future work
are the following (as now seen):
.item← 0
#. Development of %2circumscription%1 as a mode of reasoning and its
application to artificial intelligence. %2Circumscription%1 formalizes
the process of concluding (often incorrectly) that a certain collection
of facts is all that are relevant for solving a problem. It does this
by allowing one to formally assume that the entities that are generated
by specified processes are all the entities of a specified kind.
This is common in human reasoning and, for reasons described in
(McCarthy 1980), cannot be accomplished by any form of deduction.
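For reference, the schema of (McCarthy 1980) may be displayed as
follows (our transcription, in LaTeX notation): if %2A(P)%1 expresses
what is known involving a predicate %2P%1, then for every predicate
expression Φ one may conjecture
.begin verbatim select 9;
A(\Phi) \land \forall x.(\Phi(x) \supset P(x))
        \supset \forall x.(P(x) \supset \Phi(x))
.end
that is, the objects that %2A%1 forces to satisfy %2P%1 are conjectured
to be the only ones that do.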
#. Treating concepts as objects. This, described in
(McCarthy 1980), facilitates, and may be required for, reasoning about
knowledge, belief, wants, possibility and necessity.
As we shall explain later, it may be necessary to replace the approach
of that paper by one which makes distinctions of language between objects
and concepts of them only when required. Not making the distinctions
would be accomplished by some kind of non-monotonic reasoning.
#. The current biggest gap in computer reasoning about the physical world
is the complete lack of a system for reasoning with partial
information about concurrent processes.
All the current problem solving programs assume that each action of the
program produces a next state that depends on the current state, the action,
and sometimes on a random variable. They don't take into account continuous
processes or the fact that other actions and events may be taking place.
We believe that circumscription may be important in formalizing what
people know about such processes.
Little progress has been made by anyone on this problem in the last few
years.
#. We also plan some study of the theory of patterns, especially higher
order patterns in which function variables may be matched and relations
between patterns - for example, the relation between a pattern describing
a type of three-dimensional object such as a person or a vehicle and the
patterns of its perception - such as its projection on a retina as modified
by angle of vision, lighting and occlusion by other objects.
In all this work, the emphasis is on representation of the information
that is actually available to a person or robot with given opportunities to
observe and compute and act.
.cb Recent Advances and Changes in Direction of the Research
Since our 1977 proposal, interest has increased in non-monotonic
reasoning. We organized a mini-conference at Stanford in 1978, and
that led to a special issue of %2Artificial Intelligence%1 containing
updated versions of the papers presented at the conference together
with a paper by Raymond Reiter (1980). Drew McDermott and Jon Doyle
have attacked the problem through what they call %2non-monotonic logic%1
and Raymond Reiter has introduced a %2logic of defaults%1. Our own
approach through circumscription has been further developed in
the published version.
Participation in the study on artificial intelligence and
philosophy held in 1979-80 at the Center for Advanced Study in the
Behavioral Sciences has produced something of a change in viewpoint,
although it has not yet led to definite technical results. Consider the
following problem:
%2Suppose that a law making it a crime to "attempt to bribe
a public official" has been passed, and suppose further that the
law has been enforced with various trials, convictions and appeals
for 20 years before the following questions arise.
(1) Person A attempts to bribe B to help him get a contract
under the impression that B is a private consultant to a government agency.
In fact, B is an official of the agency. A offers the defense that
since he didn't know B was a public official, he couldn't have been
attempting to bribe a public official.
(2) C attempts to bribe D, but D had resigned his position
as a public official before the attempt was made. The defense is
offered that attempting to bribe someone who in fact is not a
public official isn't attempting to bribe a public official.
(3) E lets it be known around town that he will pay $1,000
to any public official who can fix his drunk driving conviction.
There happens to be no official who can fix the conviction. Is
E guilty of attempting to bribe a public official if he isn't
attempting to bribe a specific person? Is it relevant whether there
is a person who can do what E wants done%1?
Such questions are familiar to both philosophers
and lawyers. The distinction between (1) and (2) is the
well known %2de re - de dicto%1 distinction between interpretations
of "attempting to bribe a public official". This distinction and
even more complicated distinctions have been made in court decisions.
A famous 1879 case concerns a cow which was bought on the basis
that the seller thought the cow was infertile and the buyer thought
he could make the cow breed. When the seller discovered that the cow was
actually pregnant, he refused to deliver on the grounds that
he had sold a barren cow. The issue is whether there was a meeting
of minds.
When we try to design an AI system, our interest in such
problems is different from that expressed by either philosophers
or lawyers.
They are interested in making distinctions that have not previously
been made. The primary AI interest should be in the state of mind
of the lawyer or judge who tries cases of attempting to bribe a
public official for twenty years without ever thinking of the
distinction. When the distinction is pointed out, he can understand
that there is a distinction, although he may not have any definite
way of resolving the ambiguity in the law.
(McCarthy xxx) presents a formalism that resolves simple
forms of the %2de re - de dicto%1 distinction, and we knew about
the problem of stating that John is seeking a unicorn without
presuming the existence of unicorns and planned to resolve it.
The formalism of that paper allowed both physical entities and
concepts of them to be individuals in a first order theory.
All statements of knowing or wanting or attempting take concepts
of objects, and the %2de re%1 and %2de dicto%1 versions of "attempting
to bribe a public official" would have different forms, and if
the legislature wrote the law in the language of (McCarthy xxx),
they would automatically say one or the other or the conjunction
or the disjunction.
However, it now seems that philosophers and lawyers invent
new distinctions all the time, and we certainly can't revise the
foundation of our language every time such a distinction comes
along. Even if we could, the question still arises of what
was the meaning of what was said before the distinction was
made.
While it is possible that the formalism of (McCarthy xxx)
could cover all cases of interest, this now seems unlikely, and it
is interesting to explore a different approach. This new approach
has other advantages as well.
The approach is complicated and not yet well developed, but here is the
basic idea.
We still use a first order system, but we have only one
form of expression. We have the default rule that equals may
be substituted for equals unless there is a reason why not. In
philosophers' Latin, %2ceteris paribus, de re = de dicto%1. Our
hope is that extending non-monotonic reasoning to say that certain
concepts are not ambiguous unless there is reason to the contrary
will allow AI systems that behave more like people in that respect.
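One hypothetical way to write the default (the abnormality predicate
%2ab%1 and the exact form are our illustration only, not a settled
proposal) is as a schema over contexts %2C%1, in LaTeX notation:
.begin verbatim select 9;
\forall x \forall y.\ (x = y \land \neg ab(C,x,y))
        \supset (C(x) \equiv C(y))
.end
Circumscribing %2ab%1 then permits substitution except in those
contexts where something, such as a court decision distinguishing the
readings, forces an abnormality.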
The philosophers have a slogan that doing philosophy should
not depend on doing all science first. Thus understanding what
people mean by the word "fish" should not depend on the philosopher
knowing what distinguishes fish from other vertebrates. It appears
that we need another slogan to the effect that doing AI should
not depend on doing all philosophy first. Thus it should be possible
to design a program that could discuss cases of attempting to
bribe a public official without the programmer knowing about
the ambiguous cases. The program he creates, like the programmer
himself, should be able to understand the distinction when it is
pointed out, but it shouldn't have an automatic way of resolving
the doubtful cases.
We have some examples of using circumscription for this
in a second order formalism. Namely, a suitable circumscription
formula asserts that a function like ⊗Telephone of (McCarthy xxx)
is extensional unless there is a reason why not. However, this
may not be a sufficiently general way of treating the problem.
McCarthy and interested students will explore this problem
in the next three years.
.cb Proof-Checking by Computer
Computer proof-checking is needed for the AI research, and
its applications to program verification will become practical in the
very near future. First, however,
we must distinguish it from theorem proving by computer.
It is an essential part of the notion of proof in a formal
system that a candidate proof can be checked by a mechanical process.
It is a consequence of the work of Goedel, Church and Turing that
no mechanical process can always determine whether a proof of a
given sentence exists in a formal system.
In principle, therefore, proof-checking should be easy, while all
the difficulties of understanding the creative processes of a
mathematician are involved in making a high powered theorem prover.
However, in spite of the simplicity of modern logical and set
theoretic systems and the brevity with which they permit axiomatizing
mathematical and physical systems, checking actual proofs
has encountered formidable difficulties. It is easy enough to make
proof-checkers for the formal systems in logic and set theory textbooks,
and we did it some years ago. The difficulty is that the formal proofs of easy
theorems turn out to be ten times longer than informal proofs, and the
excess in length grows the further one advances into the theory. The
trouble is that mathematicians have not succeeded in completely formalizing
the languages and reasoning processes they actually use. Much reasoning
that at first seems to be within a given mathematical system is actually
metamathematical reasoning about the system. Even when a mathematician
is working within a system, he establishes shortcuts whose repeated use
keeps down the length of a proof.
When we turn to non-mathematical reasoning, present logical
systems are even more inadequate for expressing the reasoning used
in solving problems. We have already identified two such inadequacies:
problem solving requires that conventional deductive reasoning interact
with observation and use computable models, and circumscription as described
above and in (McCarthy 1980), requires new tools. The proof-checker
allows us to verify whether our axioms and conventional rules of inference
together with our proposed new methods are really adequate
to express the reasoning required to solve a problem.
We have a proof-checker called FOL (for first order logic) (Weyhrauch
1977a), we have been improving it since 1973, and we propose to improve
it further under this grant. (Rewriting it from scratch will be required
at some point, but we aren't sure when this will be the best use of
limited manpower.) With FOL, we have checked a variety of mathematical
proofs, and each such project has told us something about how to improve
the proof-checker or how to formalize a given mathematical domain or
how to write and debug proofs. The proofs described below
each tested the adequacy of FOL and our axiomatizations.
(a) Any device capable of accepting general mathematical reasoning must be
able to prove theorems of arithmetic. %2"If p is a prime number and
p divides the product ab, then p divides a or p divides b."%1 is a typical
simple example that is not just an induction on the statement of the
problem.
(b) The convenience of doing set-theoretic reasoning in FOL
was tested by interactively proving Lagrange's
theorem and Ramsey's theorem. These are well known theorems of mathematics.
[Lagrange's theorem: the order of a subgroup of a group G divides the order
of G; Ramsey's theorem: let G be a countably infinite set of points, each
point being connected to every other by a thread, each thread being red or
black. Then there is an infinite subset of these points such that the
connecting threads within the subset are all of the same color.]
(c) Wilson's theorem that "If %2p%1 is prime, then %2p%1 divides %2(p-1)!+1%1"
is a purely arithmetic statement, but its proof uses a
pairing argument (requiring set theory) which is beyond the capability of
present theorem-proving programs.
(d) We checked the first one hundred theorems of set theory as presented
in (Kelley 1955). This established a large collection
of proofs to be used as benchmarks to measure later ideas for shortening
proofs.
(e) The problem of formalizing how we can reason about what other people
know was studied by axiomatizing the "wise man problem" (McCarthy %2et al.%1
1977e) and (Sato 1977). This was perhaps the first extended application
of a formalization of knowledge. It took the fact that
the first and second wise men didn't know the colors of their own spots
as hypotheses. The problem of proving non-knowledge by assuming that
a person only knows what follows from what he is stated to know has not
been treated in the philosophical or AI literature. Chris Goad and John
McCarthy (separately) have attacked this problem, which has turned out
to be quite difficult and deep.
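For illustration only (this is not the Goad or McCarthy
formalization), the hypothesis that a person knows only what follows
from what he is stated to know can be read as a closure rule in the
metatheory, written here in LaTeX notation:
.begin verbatim select 9;
Knows(a,p) \equiv (Axioms(a) \vdash p)
.end
Non-knowledge is then established by exhibiting a model of
%2Axioms(a)%1 in which %2p%1 fails; the difficulty is that the rule
lives in the metatheory and is non-monotonic in %2Axioms(a)%1.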
(f) We proved the correctness of the McCarthy-Painter compiler (McCarthy
and Painter 1967). This allowed us to compare the original first order
proof with some more recent proofs (Weyhrauch and Milner 1972; Boyer and
Moore 1975).
After a period of adding new features to FOL, we have again begun to
experiment with proofs and our initial successes are encouraging. These
recent improvements have reduced the length of some
proofs by a factor of better than ten.
Some of the proofs we are now producing are about as long as
their informal versions. This is most clearly evident in
the %2samefringe%1 example below. Here are three of our recent
experiments:
Filman (1979) has used FOL to show that the
reasoning involved in the solution of a hard retrospective chess problem
can be formalized in first order logic. We chose this example from
outside mathematics, because human reasoning often mixes deduction with
observation of the outside world, and observation of a chess %2partial
position%1 is prominent in this example. The semantic attachment
mechanism of FOL was used to build a simulation of his chess world. He
could then use the semantic simplification routines of FOL to answer (in a
single step) questions like "is the black king in check on board B" by
looking at the model rather than deducing that black was in check from
axioms giving the positions of the pieces and others describing the rules
of chess. This single use of semantic attachment
saves several hundred steps over traditional formalizations. Filman's
proof is still much longer than the informal proof, showing that we still
don't fully understand how humans combine observation with deduction.
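The flavor of the mechanism can be suggested by a minimal sketch in
present-day LISP (the sketch is ours, and %2in-check-p%1 is a
hypothetical model function; FOL's actual attachment and
simplification machinery is considerably more elaborate):
.begin verbatim select 9;
;; A predicate symbol is paired with a program that
;; decides it in the intended model.
(defvar *attachments* (make-hash-table))

(defun attach (pred fn)
  (setf (gethash pred *attachments*) fn))

(defun evaluate-atom (wff)
  ;; (P a1 ... an): answered by the model when P is
  ;; attached, otherwise left for ordinary deduction.
  (let ((fn (gethash (car wff) *attachments*)))
    (if fn (apply fn (cdr wff)) wff)))

;; After (attach 'in-check 'in-check-p), a formula
;; such as (in-check black board-b) is settled in one
;; step by running in-check-p on the model's board.
.end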
We have preliminary estimates of the length of the
Kelley set theory proofs mentioned above. In an initial experiment
using only part of the currently available features, we reduced the number
of steps necessary to prove the first thirty-three theorems from 461 to 104.
Considering that it takes one step just to express the theorem, this is
quite impressive. We expect that adding the goal-structure features mentioned
below will substantially reduce these proof lengths.
In connection with McCarthy's recent formalization (McCarthy 1977d)
of LISP programs using sentences and schemata of first order
logic, we have checked a FOL proof of the correctness of his
%2samefringe%1 program. (%2samefringe[x,y]%1 is true if the
S-expressions %2x%1 and %2y%1 have the same atoms in the same order). The proof
as checked by FOL was of the same length as a written out informal proof
of the same result. We believe that such favorable results are generally
possible.
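For concreteness, here is a %2samefringe%1 in the style of McCarthy's
SIGART note "Another SAMEFRINGE", transcribed by us into present-day
LISP; the proof checked in FOL concerns McCarthy's first order
definition rather than this rendering:
.begin verbatim select 9;
(defun gopher (u)
  ;; Rotate U until its first atom reaches the front;
  ;; the fringe of U is unchanged.
  (if (atom (car u))
      u
      (gopher (cons (caar u)
                    (cons (cdar u) (cdr u))))))

(defun samefringe (x y)
  ;; True iff X and Y have the same atoms in the
  ;; same order.
  (or (eql x y)
      (and (not (atom x))
           (not (atom y))
           (let ((x1 (gopher x))
                 (y1 (gopher y)))
             (and (eql (car x1) (car y1))
                  (samefringe (cdr x1) (cdr y1)))))))
.end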
Our plans for FOL include the following:
(1) The verification of the correctness of LISP programs. John
McCarthy (1977d)
has recently begun using axiom schemas to prove the
properties of LISP programs. We intend to expand this work by introducing
meta-mathematical machinery into FOL (see below). This will be followed
by a major effort to use McCarthy's axiomatization of LISP and to prove
properties of simple LISP programs. Weyhrauch has recently pointed out
that this same technique can be used to express part of Dana Scott's
computation induction in first order logic. This also requires the
metamathematical machinery. We expect this to be sufficiently practical
for us to use this axiomatization in Stanford LISP courses.
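The central schema here is structural induction on S-expressions,
asserted in the first order setting for each formula Φ (our
transcription, in LaTeX notation):
.begin verbatim select 9;
(\forall x.\ atom(x) \supset \Phi(x))
  \land (\forall x \forall y.\ \Phi(x) \land \Phi(y)
          \supset \Phi(cons(x,y)))
  \supset \forall x.\ \Phi(x)
.end
The metamathematical machinery is what allows such a schema to be
stated once and instantiated as needed.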
(2) Another aspect of program verification is what are called intensional
properties of programs. These include things like how much storage a
program uses, and how many steps it takes. These questions cannot be
expressed in current formalisms for mathematical theory of computation. They
require theories that contain programs as objects and can talk about the
abstract properties of the machines that they run on. Most previous
formal efforts have only shown properties of the algorithms computed by
programs, not properties of programs themselves. One exception was the
work of Aiello, Aiello, and Weyhrauch (Aiello 1974) on PASCAL. This was
carried out using another formal system and we have just begun to think
about such problems using first order logic.
One approach is to take advantage of the fact that intensional properties
of one program are extensional properties of certain related programs.
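As a small illustration of that last remark (the example is ours):
the number of CONS operations performed by %2append%1, an intensional
property, is an ordinary value of a related program that returns it
alongside the result.
.begin verbatim select 9;
(defun append-count (x y)
  ;; Return (RESULT . N), where N counts the CONSes
  ;; that build the spine of the appended list.
  (if (null x)
      (cons y 0)
      (let ((r (append-count (cdr x) y)))
        (cons (cons (car x) (car r))
              (+ 1 (cdr r))))))

;; (append-count '(a b) '(c)) => ((a b c) . 2)
.end
Extensional statements about %2append-count%1, which existing
formalisms do handle, then express the storage behavior of %2append%1.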
(3) The verification of the correctness of hardware circuit design using
FOL, continuing the work of Wagner (1977). Almost all present
hardware verification is based on simulating the device in all possible states.
Wagner demonstrated that formal proofs that the hardware is correct are
feasible and require less human and computer work than the simulation
techniques. In fact, proofs of hardware correctness are often simpler
than proofs of program correctness, because mathematical induction is
often not required.
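A toy illustration of the contrast (ours, not Wagner's formalism): a
one-bit full adder is described by a boolean function, and its
correctness is a single quantifier-free identity, provable without
induction, whereas simulation checks the eight input cases one at a
time.
.begin verbatim select 9;
(defun full-adder (a b c)
  ;; A, B, C are bits; returns the sum and carry bits.
  (values (logxor a b c)
          (logior (logand a b)
                  (logand c (logxor a b)))))

;; Correctness: a + b + c = s + 2*cout for all inputs.
(defun full-adder-correct-p ()
  (loop for a in '(0 1) always
    (loop for b in '(0 1) always
      (loop for c in '(0 1) always
        (multiple-value-bind (s cout) (full-adder a b c)
          (= (+ a b c) (+ s (* 2 cout))))))))
.end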
(4) We intend to redo the theorems of Kelley mentioned above using
the many new features that have been added to FOL. These include
the syntactic simplifier, the semantic attachment mechanism, and various
decision procedures.
(5) Introducing metamathematical machinery into FOL will allow us to
state and prove theorems about how we do reasoning in the logic.
In particular, we
will be able to establish, as sentences of the metamathematics, new axiom schemas
that can be used in further proofs. This will be especially useful
in proving the correctness of recursive programs. Another application
of schemas is to axiomatizing circumscription.
(6) The usefulness of the metamathematics will be enhanced by
certain reflection principles. These are rules that state some relation
between a theory and its metamathematics. For example, if
you have proved a certain formula, then in the meta-theory you can assert
that the formula is provable. Conversely, informal mathematics often uses
metamathematical assertions that all formulas with certain properties are
true. The attachment mechanism combined with reflection principles
will allow us to automatically use such metatheorems
to prove theorems in the base theory.
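Schematically (in our notation, transcribed into LaTeX), the two
directions relate provability in a theory T to a provability
predicate in its metatheory M:
.begin verbatim select 9;
\vdash_T \varphi
   \Longrightarrow \vdash_M Provable_T(\ulcorner \varphi \urcorner)
\vdash_M Provable_T(\ulcorner \varphi \urcorner)
   \Longrightarrow \vdash_T \varphi
.end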
.cb Organization of the Work
The work is under the general direction of John McCarthy, who
also develops formalizations and general theory and uses FOL to
check them out. Richard Weyhrauch is the main developer
and implementer of FOL and is also pursuing research making metamathematical
methods available in it. Graduate students help with implementations
and pursue thesis research in artificial intelligence (concentrating
on epistemological problems) and in mathematical theory of computation.
The group shares interests with the separately supported groups in
mathematical theory of computation and theorem proving. Both the
epistemology and the proof-checking have excited much outside interest, and
we propose a rotating research associateship for recent PhDs interested in
contributing to the work and learning from it.
.cb Facilities
The project will be part of the Stanford University Artificial
Intelligence Laboratory and will use its computer facilities which have
now been merged with the facilities of the Computer Science Department
for accounting purposes.
The Laboratory is directed by John McCarthy and has been mainly supported
by ARPA in the past, but the fraction of its support provided by ARPA
is diminishing.
.cb Personnel
.cb Biography of John McCarthy
.begin nojust; indent 0,4;
BORN: September 4, 1927 in Boston, Massachusetts
EDUCATION:
.preface 0; crbreak;
B.S. (Mathematics) California Institute of Technology, 1948.
Ph.D. (Mathematics) Princeton University, 1951.
.skip
HONORS AND SOCIETIES:
American Mathematical Society,
Association for Computing Machinery,
Sigma Xi,
Sloan Fellow in Physical Science (1957-59),
ACM National Lecturer (1961),
IEEE,
A.M. Turing Award from Association for Computing Machinery (1971).
.skip
PROFESSIONAL EXPERIENCE:
Proctor Fellow, Princeton University (1950-51),
Higgins Research Instructor in Mathematics, Princeton University (1951-53),
Acting Assistant Professor of Mathematics, Stanford University (1953-55),
Assistant Professor of Mathematics, Dartmouth College (1955-58),
Assistant Professor of Communication Science, M.I.T. (1958-61),
Associate Professor of Communication Science, M.I.T. (1961-62),
Professor of Computer Science, Stanford University (1962-present).
.skip
PROFESSIONAL RESPONSIBILITIES AND SCIENTIFIC INTERESTS:
.crspace;
With Marvin Minsky organized and directed the Artificial Intelligence
Project at M.I.T.
Organized and directs Stanford Artificial Intelligence Project.
Developed the LISP programming system for computing with
symbolic expressions, participated in the development
of the ALGOL 58 and the ALGOL 60 languages.
Present scientific work is in the fields of Artificial
Intelligence, Computation with Symbolic Expressions,
Mathematical Theory of Computation, Time-Sharing computer
systems.
PUBLICATIONS:
.count ref inline; at "⊗" ⊂next ref; ("["&ref&"] ");⊃
. at "<" ⊂"%2"⊃; at ">" ⊂"%1"⊃;
⊗"Towards a Mathematical Theory of Computation", in
<Proc. IFIP Congress 62>, North-Holland, Amsterdam, 1963.
⊗"A Basis for a Mathematical Theory of Computation",
in P. Braffort and D. Hershberg (eds.), <Computer Programming and
Formal Systems>, North-Holland, Amsterdam, 1963.
⊗(with S. Boilen, E. Fredkin, J.C.R. Licklider)
"A Time-Sharing Debugging System for a Small Computer", <Proc.
AFIPS Conf.> (SJCC), Vol. 23, 1963.
⊗(with F. Corbato, M. Daggett) "The Linking
Segment Subprogram Language and Linking Loader Programming
Languages", <Comm. ACM>, July 1963.
⊗"Problems in the Theory of Computation", <Proc. IFIP
Congress 1965>.
⊗"Time-Sharing Computer Systems", in W. Orr (ed.),
<Conversational Computers>, Wiley, 1966.
⊗"A Formal Description of a Subset of Algol", in T.
Steele (ed.), <Formal Language Description Languages for Computer
Programming>, North-Holland, Amsterdam, 1966.
⊗"Information", <Scientific American>, September
1966.
⊗"Computer Control of a Hand and Eye", in <Proc.
Third All-Union Conference on Automatic Control (Technical
Cybernetics)>, Nauka, Moscow, 1967 (Russian).
⊗(with D. Brian, G. Feldman, and J. Allen) "THOR -- A
Display Based Time-Sharing System", <Proc. AFIPS Conf.> (FJCC), Vol.
30, Thompson, Washington, D.C., 1967.
⊗(with James Painter) "Correctness of a Compiler for
Arithmetic Expressions", Amer. Math. Soc., <Proc. Symposia in
Applied Math., Math. Aspects of Computer Science>, New York, 1967.
⊗"Programs with Common Sense", in Marvin Minsky
(ed.), <Semantic Information Processing>, MIT Press, Cambridge, 1968.
⊗(with Lester Earnest, D. Raj. Reddy, Pierre Vicens) "A
Computer with Hands, Eyes, and Ears", <Proc. AFIPS Conf.> (FJCC),
1968.
⊗(with Patrick Hayes) "Some Philosophical Problems from the
Standpoint of Artificial Intelligence", in Donald Michie (ed.),
<Machine Intelligence 4>, American Elsevier, New York, 1969.
⊗"The Home Information Terminal", <Man and Computer,
Proc. Int. Conf., Bordeaux, 1970>, S. Karger, New York, 1972.
⊗"Mechanical Servants for Mankind," in <Britannica Yearbook of
Science and the Future>, 1973.
⊗Book Review: "Artificial Intelligence: A General Survey" by Sir James
Lighthill, in <Artificial Intelligence, Vol. 5, No. 3>, Fall 1974.
⊗"Modeling Our Minds" in <Science Year 1975>, The World Book Science
Annual, Field Enterprises Educational Corporation, Chicago, 1974.
⊗"The Home Information Terminal," invited presentation, AAAS Annual
Meeting, Feb. 18-24, 1976, Boston.
⊗"An Unreasonable Book," a review of <Computer Power and Human Reason>,
by Joseph Weizenbaum (W.H. Freeman and Co., San Francisco, 1976)
in SIGART Newsletter #58, June 1976, also in <Creative Computing>,
Chestnut Hill, Massachusetts, 1976, and in "Three Reviews of
J. Weizenbaum's <Computer Power and Human Reason>" (with B. Buchanan
and J. Lederberg), Stanford Artificial Intelligence Laboratory
Memo 291, Computer Science Department, Stanford, November 1976.
⊗Review: <Computer Power and Human Reason>, by Joseph Weizenbaum (W.H.
Freeman and Co., San Francisco, 1976) in <Physics Today>, 1977.
⊗"The Home Information Terminal" to appear in The Grolier Encyclopedia,
1977, also to appear in <The International Year Book and Statesmen's Who's
Who>, Surrey, England, 1977.
⊗"Dialnet and Home Computers" (with Les Earnest), <Proceedings of the
First West Coast Computer Faire and Conference>, San Francisco, April 1977.
⊗"On The Model Theory of Knowledge" (with M. Sato, S. Igarashi, and
T. Hayashi), <Proceedings of the Fifth International
Joint Conference on Artificial Intelligence>, M.I.T., Cambridge, 1977.
⊗"Another SAMEFRINGE", in SIGART Newsletter No. 61, February 1977.
⊗"Ascribing Mental Qualities to Machines" to appear in <Essays in Philosophy
and Computer Technology>, National Symposium for Philosophy and Computer
Technology, New York, 1977.
⊗"Epistemological Problems of Artificial Intelligence", <Proceedings of the Fifth
International Joint Conference on Artificial Intelligence>, M.I.T., Cambridge, 1977.
.end
.onecol; cb Budget
.begin "budget" verbatim select 5;
PERIOD COVERED: 3 Years: 1 June 1978 through 31 May 1981.
Dates: 6/1/78-5/31/79 6/1/79-5/31/80 6/1/80-5/31/81
Person- Person- Person-
A. SALARIES AND WAGES months months months
1. Senior Personnel:
a. John McCarthy, 24,257. 6.5 27,007. 6.5 29,168. 6.5
Professor
Summer 75%(2 mos.)
Acad. Yr. 50%
2. Other Personnel
a. Student Research
Assistants
(50% Acad.Yr.;
100% Summer)
(1)Carolyn Talcott 7,155. 7.5 7,704. 7.5 8,320. 7.5
(2) 7,155. 7.5 7,704. 7.5 8,320. 7.5
b. Support Personnel
(1) Sec'y (20%) 2,092. 2.4 2,259. 2.4 2,440. 2.4
(2) Sys.Prog.(15%) 2,937. 1.8 3,172. 1.8 3,426. 1.8
_______ _______ _______
Total Salaries & Wages 43,596. 47,846. 51,674.
B. STAFF BENEFITS
9/1/77-8/31/78:19.0%
9/1/78-8/31/79:20.3%
9/1/79-8/31/80;21.6%
9/1/80-8/31/81;22.4%
8,708. 10,179. 11,472.
_______ ________ ________
C. TOTAL SALARIES, WAGES,
AND STAFF BENEFITS 52,304. 58,025. 63,146.
.next page
D. PERMANENT EQUIPMENT 5,000 -- --
(2 Datamedia terminals)
E. EXPENDABLE SUPPLIES 1,632. 1,730. 1,834.
& EQUIPMENT(e.g.,copying,
office supplies, postage,
freight, consulting,
honoraria)
F. TRAVEL 1,840. 1,950. 2,067.
All Domestic Travel
G. PUBLICATIONS 1,000. 1,060. 1,124.
H. OTHER COSTS 6,640. 7,038. 7,460.
1.Communication 1,600.
(telephone)
2. Computer 5,040.
Equip. Maint.
_______ ________ _______
I. TOTAL COSTS 68,416. 69,803. 75,631.
(A through H)
J. INDIRECT COSTS:58% of 36,781. 40,486. 43,866.
A through H, less D. ________ ________ ________
K. TOTAL COSTS 105,197. 110,289. 119,497.
L. THREE YEAR TOTAL 334,983.
.end "budget"
.twocol; bib;
%3Boyer, R.S., and Moore, J.S.%1 (1975) Proving Theorems About LISP Functions.
%2JACM%1, Vol. 22, No. 1, pp. 129-144. New York: ACM.
%3Filman, R.E.%1 (1979) The Interaction of Observation and Inference.
(AIM-327) Stanford University.
%3McCarthy, J. and Painter, J.%1 (1967) Correctness of a Compiler for
Arithmetic Expressions. %2Proc. Symposia in Applied Math., Math. Aspects
of Computer Science%1. New York: Amer. Math. Soc.
%3McCarthy, J. and Hayes, P.J.%1 (1969) Some Philosophical Problems from
the Standpoint of Artificial Intelligence. %2Machine Intelligence 4%1,
pp. 463-502 (eds Meltzer, B. and Michie, D.). Edinburgh: Edinburgh
University Press.
%3McCarthy, J.%1 (1979a) First Order Theories of Individual Concepts and
Propositions. %2Machine Intelligence 9%1, (Ed., Michie, D.). Edinburgh,
Edinburgh University Press.
%3McCarthy, J.%1 (1979b) Ascribing Mental Qualities to Machines.
%2Philosophical Perspectives in Artificial Intelligence%1, (Ed., Ringle, M.).
Harvester Press.
%3Cartwright, Robert and McCarthy, John%1 (1979c) Recursive Programs as
Functions in a First Order Theory. %2Proceedings of the International Conference
on Mathematical Studies of Information Processing%1, Kyoto, Japan.
%3McCarthy, J.%1 (1980) Circumscription - A Form of Non-Monotonic Reasoning.
%2Artificial Intelligence%1, Volume 13, Numbers 1, 2, April.
%3McCarthy, J., Sato, M., Hayashi, T. and Igarashi, S.%1 (1977e)
On the Model Theory of Knowledge. Presented at %2IJCAI-1977%1;
to appear in the %2SIGART Newsletter%1.
xxx replace by reference to Moore thesis
%3Moore, Robert C.%1 (1977) Reasoning about Knowledge and Action, %21977
IJCAI Proceedings%1. Available as α<Arpanet:SAILα> IJCAI.PUB[1,RCM].
%3Sato, M.%1 (1977) A Study of Kripke-type Models for some Modal Logics
by Gentzen's Sequential Method (to appear in Publ. R.I.M.S., Kyoto University).
%3Wagner, Todd J.%1 (1977) Hardware Verification. Ph.D. thesis, Stanford
University. Available as α<Arpanet:SAILα> THESIS.PUB[THE,TJW].
.end